
Strong AI is artificial intelligence that matches or exceeds human intelligence: the intelligence of a machine 
that can successfully perform any intellectual task that a human being can.
It is a primary goal of artificial intelligence research and an important topic for science fiction writers and 
futurists. Strong AI is also referred to as "artificial general intelligence" or as the ability to perform 
"general intelligent action", the term used for "human-level" intelligence in the physical symbol 
system hypothesis. Science fiction associates strong AI with such human traits as consciousness, sentience, 
sapience and self-awareness.
Some references emphasize a distinction between strong AI and "applied AI".

Applied AI, in this distinction, is the use of software to study or accomplish specific problem-solving or reasoning tasks that do 
not encompass (or in some cases are completely outside of) the full range of human cognitive abilities.

Many different definitions of intelligence have been proposed (such as being able to pass the Turing test), but 
there is to date no definition that satisfies everyone. AI founder John McCarthy writes: "we cannot yet 
characterize in general what kinds of computational procedures we want to call intelligent."
However, there is wide agreement among artificial intelligence researchers that intelligence is required to do 
the following:
* reason, use strategy, solve puzzles, and make judgments under uncertainty;
* represent knowledge, including commonsense knowledge;
* plan;
* learn;
* communicate in natural language;
* and integrate all these skills towards common goals.
Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move 
and manipulate objects) in the world where intelligent behaviour is to be observed.
This would include an ability to detect and respond to hazards.
Some sources consider "salience" (the capacity for recognising importance) an important trait. 
Salience is thought to be part of how humans evaluate novelty, so it is likely to be important to some degree, but 
not necessarily at a human level. Many interdisciplinary approaches to intelligence (e.g. cognitive science, 
computational intelligence and decision making) tend to emphasise the need to consider additional traits such as 
imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy.
Computer-based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, 
decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI 
which play a major role in science fiction and the ethics of artificial intelligence:
* consciousness: To have subjective experience and thought. Note that consciousness is difficult to define. 
A popular definition, due to Thomas Nagel, is that it "feels like" something to be conscious. If we are not 
conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does 
it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" 
Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. 
* self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
* sentience: The ability to "feel" perceptions or emotions subjectively.
* sapience: The capacity for wisdom.
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, 
analogous to the rights of animals. Also, Bill Joy, among others, argues that a machine with these traits may 
be a threat to human life or dignity. It remains to be shown whether any of these traits are necessary for
strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. 
If a machine is built with a device that simulates the neural correlates of consciousness, would it 
automatically have self-awareness? It is also possible that some of these properties, such as sentience, 
naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties 
to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may 
be sufficient for sentience, rather than the other way around.
Modern AI research began in the mid 1950s.  The first generation of AI researchers were convinced that strong 
AI was possible and that it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: 
"machines will be capable, within twenty years, of doing any work a man can do."  
Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who 
accurately embodied what AI researchers believed they could create by the year 2001. 
AI pioneer Marvin Minsky was a consultant on the set, and in 1967 he had said of the problem: "Within a 
generation...the problem of creating 'artificial intelligence' will substantially be solved." (Minsky states 
that he was misquoted.) However, in the early 1970s, it became obvious that researchers had grossly 
underestimated the difficulty of the project. The agencies that funded AI became skeptical of strong AI and 
put researchers under increasing pressure to produce useful technology, or "applied AI". The Lighthill report 
specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England.
In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic 
undirected research". As the 1980s began, Japan's fifth generation computer project revived interest in strong AI, 
setting out a ten-year timeline that included strong AI goals like "carry on a casual conversation". 
In response to this and to the success of expert systems, both industry and government pumped money back into 
the field. However, the market for AI spectacularly collapsed in the late 1980s and the goals of the fifth 
generation computer project were never fulfilled. For the second time in 20 years, AI researchers who had 
predicted the imminent arrival of strong AI had been shown to be fundamentally mistaken about what they could 
accomplish. By the 1990s, AI researchers had gained a reputation for making promises they could not keep.
AI researchers became reluctant to make any kind of prediction at all and avoided any mention of "human-level" 
artificial intelligence, for fear of being labeled "wild-eyed dreamers"; at its low point, some computer 
scientists and software engineers avoided the term "artificial intelligence" entirely. As AI founder John 
McCarthy wrote in his Reply to Lighthill, "it would be a great relief to the rest of the workers in AI if the 
inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been 
the case."
In the 1990s and early 21st century, mainstream AI achieved a far higher degree of commercial success 
and academic respectability by focusing on specific sub-problems where researchers can produce verifiable results 
and commercial applications, such as neural networks, computer vision or data mining. These "applied AI" 
applications are now used extensively throughout the technology industry, and research in this vein is very 
heavily funded in both academia and industry. Most mainstream AI researchers hope that strong AI can be 
developed by combining the programs that solve various sub-problems using an integrated agent architecture, 
cognitive architecture or subsumption architecture.
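To make the "integrated agent" idea concrete, the sketch below combines stand-ins for several sub-problem solvers (perception, planning, learning, action) behind a single perceive-decide-act loop. Every class, method and behaviour here is a hypothetical illustration of the architectural pattern, not an existing system or library.

```python
# Hypothetical sketch of an integrated agent architecture: separate
# sub-problem solvers (all stubs invented for illustration) combined
# behind one perceive-decide-act loop.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)

    def perceive(self, observation: str) -> dict:
        # Stand-in for a vision / sensing sub-system.
        return {"raw": observation, "novel": observation not in self.memory}

    def plan(self, percept: dict) -> str:
        # Stand-in for a planning / reasoning sub-system.
        return "explore" if percept["novel"] else "exploit"

    def act(self, goal: str) -> str:
        # Stand-in for an actuation / language sub-system.
        return f"action chosen for goal: {goal}"

    def step(self, observation: str) -> str:
        percept = self.perceive(observation)
        goal = self.plan(percept)
        self.memory.append(observation)  # learning, reduced to rote storage
        return self.act(goal)

agent = Agent()
print(agent.step("red door"))  # -> action chosen for goal: explore
print(agent.step("red door"))  # -> action chosen for goal: exploit
```

The point of the pattern is the integration itself: each stub could be replaced by a genuine sub-problem solver without changing the surrounding loop.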
Artificial general intelligence (AGI) describes research that aims to create machines capable of general 
intelligent action; the term was introduced in 2003. A proposed metric for AGI is derived from the psychometric 
notion of natural general intelligence (often denoted "g"), though no adherence to any particular theory of g 
is implied.
The research objective is much older; for example, Doug Lenat's Cyc project (begun in 1984) and 
Allen Newell's Soar project are regarded as within the scope of AGI. As yet, most AI researchers have 
devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated 
in the near term. However, a small number of computer scientists are active in AGI research, and many of this 
group contribute to a series of AGI conferences.
The research is extremely diverse and often pioneering in nature. In the introduction to his book, 
Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to 
over a century, but the consensus in the AGI research community seems to be that the timeline discussed by 
Ray Kurzweil in "The Singularity is Near" (i.e. between 2015 and 2045) is plausible.
Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing 
AGI include Adaptive AI, Artificial General Intelligence Research Institute (AGIRI), the Singularity Institute 
for Artificial Intelligence, and TexAI. 
AND Corporation has been active in this field since 1990 and has developed machine intelligence processes 
based on phase coherence principles, which have strong similarities to digital holography and to the 
quantum-mechanical collapse of the wave function. Ben Goertzel is pursuing an embodied AGI through the 
open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple 
English-language commands, as well as integration with real-world robotics being done at the robotics lab of 
Hugo de Garis at Xiamen University.
A popular approach to achieving general intelligent action is whole brain emulation. The basic idea is to take 
a particular brain, scan and map its structure in detail, and copy its state into a computer system or another 
computational device. The computer then runs a simulation model so faithful to the original that, on appropriate 
hardware, it will behave in essentially the same way as the original brain, or for all practical purposes, 
indistinguishably. Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in 
the context of brain simulation for medical research purposes, and in artificial intelligence research as an 
approach to strong AI. The neuroimaging technologies that could deliver the necessary detailed understanding 
are improving rapidly.
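As a rough intuition for what "running a low-level brain model" means, the toy sketch below simulates a small network of leaky integrate-and-fire neurons, a standard simplified neuron model. The network size, connectivity and all parameters are arbitrary illustrations, many orders of magnitude below any real brain; this is a sketch of the style of simulation, not of an actual emulation.

```python
# Toy leaky integrate-and-fire network: a vastly scaled-down stand-in for
# the low-level simulation that whole brain emulation envisions.
# All parameters are arbitrary illustrations, not biological measurements.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                       # toy size; a human brain has ~10^11 neurons
dt, tau = 1e-3, 20e-3          # time step and membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential

# The "scanned connectome": here just a sparse random weight matrix.
weights = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.01)

v = np.zeros(N)                # membrane potentials (the copied "state")
for step in range(1000):       # simulate one second of activity
    spikes = v >= v_thresh     # neurons that fire on this step
    v[spikes] = v_reset        # reset neurons that just fired
    drive = rng.normal(1.2, 0.5, N)                    # background input
    v += (-v + drive + weights @ spikes) * (dt / tau)  # leaky integration
```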
An extremely powerful computer would be required for a brain simulation. The human brain has a huge number 
of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other 
neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). 
This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 
5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch 
model for neuron activity, is around 10^14 (100 trillion) neuron updates per second. Kurzweil looks at various 
estimates for the hardware required to equal the human brain and adopts a figure of 10^16 computations per 
second (cps).
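These figures combine in simple ways; the snippet below sketches the order-of-magnitude arithmetic, using only the rough estimates already quoted above as inputs.

```python
# Order-of-magnitude arithmetic for the estimates quoted above.
neurons = 1e11                 # ~10^11 neurons in the human brain
synapses_per_neuron = 7e3      # ~7,000 synaptic connections per neuron
updates_per_sec = 1e14         # simple-switch-model processing estimate

synapses = neurons * synapses_per_neuron
print(f"total synapses       ~ {synapses:.0e}")  # ~7e14, same order as the
                                                 # 10^14-5x10^14 adult range
print(f"updates per neuron/s ~ {updates_per_sec / neurons:.0e}")  # ~1e3 (kHz)
```

On these numbers, Kurzweil's 10^16 cps figure sits roughly two orders of magnitude above the simple-switch estimate of 10^14 updates per second.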
Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers regard research that 
investigates possibilities for implementing consciousness as vital. In an early effort, Igor Aleksander 
argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.

The term "strong AI" was adopted from the name of a position in the philosophy of artificial intelligence first identified by John Searle as part of his Chinese room argument in 1980.  He wanted to distinguish between two different hypotheses about artificial intelligence:As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
An artificial intelligence system can think and have a mind.The word "mind" is has a specific meaning for philosophers, as used in the mind body problem or the philosophy of mind An artificial intelligence system can (only) act like it thinks and has a mind.
The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage, which is fundamentally different than the subject of this article, is common in academic AI research and textbooks.Among the many sources that use the term in this way are:
the Oxford University Press Dictionary of Psychology (quoted in High Beam Encyclopedia); the MIT Encyclopedia of Cognitive Science (quoted in AITopics); PlanetMath; "Arguments against Strong AI" (Raymond J. Mooney, University of Texas); "Artificial Intelligence" (Rob Kremer, University of Calgary); "Minds, Math, and Machines: Penrose's thesis on consciousness" (Rob Craigen, University of Manitoba); "The Science and Philosophy of Consciousness" (Alex Green); "Philosophy & AI" (Bernard); "Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?" (Anthony Tongen); and the Usenet FAQ on Strong AI.
The term "strong AI" is now used to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not. Dijkstra has been quoted as saying, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." 
As Russell and Norvig write: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." AI researchers are instead interested in a related statement, which some sources confusingly also call "the strong AI hypothesis" (as in, for example, "Strong AI Thesis" in Neuroscience and the Soul): an artificial intelligence system can think (or act like it thinks) as well as or better than people do.
This assertion, which hinges on the breadth and power of machine intelligence, is the subject of this article.
Since the launch of AI research in 1956, progress in the field has slowed over time, stalling the aim of creating machines capable of intelligent action at the human level. A possible explanation for this delay is that computers lack sufficient memory or processing power. In addition, the sheer complexity of the problems involved in AI research may itself limit progress.
While most AI researchers believe that strong AI can be achieved in the future, some individuals, such as Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI. John McCarthy is one of various computer scientists who believe that human-level AI will be accomplished, but that a date cannot accurately be predicted.
Conceptual limitations are another possible reason for the slowness of AI research. AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest for strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".
Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, yet they have struggled to develop a computer capable of carrying out tasks that are simple for humans to do. A problem described by David Gelernter is that some people assume thinking and reasoning are the same thing. The question of whether thoughts can be separated from the thinker of those thoughts has also intrigued AI researchers.
The problems encountered in AI research over the past decades have further impeded its progress. The failed predictions made by AI researchers, and the lack of a complete understanding of human behaviour, have diminished confidence in the primary idea of human-level AI. Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about achieving the goal of AI in the 21st century.
Other possible reasons have been proposed for the slow progress of strong AI research. The intricacy of the scientific problems involved, and the need to fully understand the human brain through psychology and neurophysiology, have kept many researchers from emulating the function of the human brain in computer hardware. Many researchers tend to underestimate the uncertainty involved in predictions about the future of AI, yet without taking those doubts seriously, people can overlook solutions to problematic questions.
Clocksin says that a conceptual limitation that may impede the progress of AI research is that researchers may be using the wrong techniques for computer programs and for the implementation of equipment. When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer to perform a specific cognitive task.
The practice of abstraction, which researchers tend to redefine when working in a particular context, lets them concentrate on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has raised questions about the involvement of abstraction operators.
A possible reason for the slowness of AI research is the acknowledgement by many AI researchers that heuristics is an area in which there remains a significant gap between computer performance and human performance. The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.
Many AI researchers debate whether machines should be created with emotions. Typical models of AI do not include emotions, and some researchers say that programming emotions into machines would allow them to have a mind of their own. Emotion sums up human experience because it allows humans to remember those experiences. As David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion poses problems for AI researchers, and it connects to the concept of strong AI as research progresses.
